Patent abstract:
The invention is characterized by the steps of: a) identifying an edge area (10) in the image; b) identifying a first direction (A1) in the image for the edge area (10); c) identifying a local, connected edge area (11) the width of which has one single global maximum; d) identifying a second direction (A2) for the local edge area (11); e) identifying a supplemental area (12) around the local edge area (11); f) identifying the characteristic part of the image so that it comprises the local edge area (11) and the supplemental area (12).
Publication number: SE1150522A1
Application number: SE1150522
Filing date: 2011-06-08
Publication date: 2012-12-09
Inventors: Ingvar Lindgren; Mikael Loewgren
Applicant: Imtt Svenska Ab
IPC main class:
Patent description:

meaning that the part can be used to, for example, identify or compare an image instead of using the whole image. If the characteristic part contains information that is representative or descriptive of the image as a whole, it is more effective in different image analyses to take into account only the characteristic part rather than the whole image.
A problem is to be able to automatically identify relevant characteristic parts in images in a consistent and repeatable way.
WO2006085861 describes a method of identifying or indexing a digitally stored image, in which characteristic parts are identified as edge areas, defined as sets of pairs of pixels with high light contrast.
When performing an analysis based on a characteristic image part, for example by comparing it with a corresponding characteristic image part in another image, problems often arise when the image in which one characteristic part is included changes, for example by perspective shift, mirroring, rotation, resolution change, cropping, color changes and so on. Examples include comparison between two images by means of respective characteristic parts, where one image is a variant of the other image that has been cropped or rotated; the identification of deviating images in an image material by means of respective characteristic parts, where certain images have deviating resolution but where the resolution itself is not the property whose deviation is what is meant by the identification; and grouping of a large number of images according to motifs based on the respective characteristic parts, where certain images comprise the same motif but shown from slightly different angles or with partly different backgrounds.
Application text docx 2011-06-08 090066EN

The common problem in these cases is to provide a method for identifying a characteristic image part in a consistent and repeatable manner, for the purposes described above, and which is as unaffected as possible by image changes of the types listed above.
The present invention solves the problems described above.
Thus, the invention relates to a method for identifying a characteristic part of an image, wherein the image is caused to be stored in digital format in the form of a matrix composed of pixels, and is characterized in that the following steps are performed: a) identify an edge area in the image consisting of a continuous amount of edge pixels which together form an edge area in the image; b) identify a first direction in the image for the edge area; c) identify a local edge area consisting of a continuous set of pixels, for which local edge area the width along the first direction lacks local maxima except for a single global maximum; d) identify a second direction for the local edge area; e) identify an additional area around the local edge area, comprising pixels located immediately adjacent to a peripherally located boundary pixel of the local edge area and further away from the boundary pixel, up to a distance in the second direction from the boundary pixel that is proportional to the width of the local edge area in the second direction at the boundary pixel; f) identify the characteristic part of the image so that it comprises the local edge area and the additional area, and store the characteristic part in digital form on a storage medium.
The invention will now be described in detail, with reference to exemplary embodiments of the invention and the accompanying drawings, in which: Figure 1 is an enlarged detail of a digitally stored image in which a method according to the present invention is practiced; Figure 2 is a graph illustrating a threshold value for a comparison parameter according to the present invention; Figure 3 is a flow chart illustrating the identification of an edge area in accordance with the present invention; Figure 4 is a schematic illustration of an identification of a characteristic area according to the present invention; and Figure 5 is a flow chart illustrating the identification of a characteristic part in accordance with the present invention.
Figure 1 shows a greatly enlarged section 1 of an image.
In accordance with the invention, the image is stored in digital format on a digital storage medium in the form of a matrix made up of pixels. This means that the image can be an image whose original is an analogue image, for example stored on photographic film. In this case, a conventional digitization of the image is performed before it is used in a method according to the present invention. The storage medium can be any conventional digital storage medium, such as a hard disk, an internal memory of one or more computers, or a CD.
In Figure 1, the section is so enlarged that the individual pixels are clearly visible. A grid is added, solely for the purpose of further increasing the clarity of the figure.
Figure 3 briefly illustrates the different steps performed in an identification of an edge area in accordance with a preferred embodiment of the present invention.
In a first step 101, the entire image is swept, pixel by pixel, and for each pair of pixels in the image, where one pixel is in a predetermined local environment of the other pixel, a comparison parameter is calculated that constitutes the difference between the two pixels regarding an intensity property over one or more channels.
It is preferred that the intensity property difference be defined as the absolute difference in the value of the intensity property. For example, the difference may be the absolute difference in luminance of two pixels.
It is also preferred that a difference in intensity be calculated between the pixel in question and each of the pixels included in a predefined local environment of the pixel. The environment is preferably symmetrical about the pixel in question, and may, depending on, among other things, performance requirements, consist of for example 4, 8, 12, 20, 24 or more adjacent pixels.
Figure 1 illustrates a pixel 4 for which the difference in intensity is calculated in comparison with each of the pixels 3a-3h in an 8-pixel 3x3 environment 3 (the middle pixel 4 is the comparison pixel and does not belong to the environment 3).
Thus, for each pixel 4 in the image, an intensity difference is calculated between the pixel 4 and each of the pixels 3a-3h in an environment 3 of the pixel 4. According to a preferred embodiment, each ambient pixel 3a-3h defines a direction in the image in relation to the pixel 4. Table 1 shows the different directions, expressed as angles in relation to the geometry of the image, in relation to the pixel 4, that define the different ambient pixels 3a-3h in Figure 1. It is of course possible to use fewer or more directions, but it is preferred that the directions be evenly distributed between 0° and 360°.
Table 1

Ambient pixel   Direction
3a              270°
3b              315°
3c              0°
3d              45°
3e              225°
3f              180°
3g              135°
3h              90°

According to the presently preferred embodiment, a comparison parameter is calculated for each pixel 4 and for each direction used, with respect to an intensity property over one or more channels, which is the difference between the pixel 4 and the ambient pixel defining the direction in question. A "channel" in this context is a numerical indication of a certain type of intensity, such as luminance, reflectance or the like.
Examples include an RGB-coded image, where light intensity in the red, green and blue color bands is indicated numerically by each of three channels, respectively, and a gray scale image, where gray scale is indicated numerically in a single channel.
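The per-direction comparison of step 101 can be sketched as follows. This is an illustrative reconstruction, not the patent's own code; in particular, the mapping of the Table 1 angles to (row, column) offsets is an assumption, since the figure itself is not reproduced here.

```python
# Illustrative sketch of step 101: for every pixel, compute the absolute
# intensity difference toward each of the 8 neighbours in a 3x3 environment,
# one value per direction. The angle -> offset mapping below is assumed.
OFFSETS = {
    270: (-1, 0), 315: (-1, 1), 0: (0, 1), 45: (1, 1),
    225: (-1, -1), 180: (0, -1), 135: (1, -1), 90: (1, 0),
}

def directional_differences(image):
    """image: 2D list of single-channel intensity values.
    Returns a dict mapping direction (degrees) to the list of absolute
    differences collected over all valid pixel pairs in that direction."""
    h, w = len(image), len(image[0])
    diffs = {d: [] for d in OFFSETS}
    for r in range(h):
        for c in range(w):
            for direction, (dr, dc) in OFFSETS.items():
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    diffs[direction].append(abs(image[r][c] - image[nr][nc]))
    return diffs
```

Each of the eight lists corresponds to one of the per-direction graphs n(d) discussed below.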
According to a preferred embodiment, the intensity property of a certain pixel is the total luminance of the pixel in question over the entire visible color spectrum. In case the original image contains color information, it is preferred that the image in an initial step, for example in connection with the above-described possible digitization, be converted to a grayscale image, in which the total luminance for a certain pixel consists of the value of its gray tone. By disregarding color information in the image in this way, a characteristic part of the image can be identified in a way that is repeatable even in some cases where the color information of the original image has been manipulated.
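The grayscale conversion described above can be done in several ways; the sketch below uses the common Rec. 601 luminance weights as one plausible choice — the patent itself only requires some total-luminance value per pixel, so the specific weights are an assumption.

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (2D list of (r, g, b) tuples) to a
    single-channel grayscale image using Rec. 601 luminance weights.
    One reasonable choice of weights; not mandated by the patent."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
```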
For all pixels in the image, the calculated comparison value is stored for each of the surrounding pixels in the environment of each respective pixel. The set of comparison values thus obtained can be illustrated, for each of the directions used in the image, in a graph of the type illustrated in Figure 2, which along one axis d shows the difference in intensity property and along another axis n shows the number of pixels in the image that show this intensity property difference for the current direction. In other words, eight such graphs can be created for the sweep illustrated in Figure 1 using a 3x3 pixel environment. For each graph, in a step 102, a threshold value d1 is then identified, which is the first local minimum of the function n(d) illustrated for a certain direction in Figure 2. When calculating the value d1, it is preferred to first perform a filtering of the function n(d) so that it becomes smoother and/or to introduce a certain tolerance parameter that ignores minor fluctuations in the value of the function. This ensures that it is the first markedly prominent minimum that is selected as the threshold value d1.
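Step 102 — finding the first markedly prominent local minimum of n(d) — can be sketched with a simple tolerance parameter, as described above. The function name and signature are illustrative; a smoothing filter could equally be applied to the counts first.

```python
def first_local_minimum(counts, tolerance=0):
    """counts[d] = number of pixel pairs with intensity difference d,
    i.e. the function n(d). Returns the first d that is a local minimum,
    ignoring dips shallower than `tolerance` (a sketch of threshold d1)."""
    for d in range(1, len(counts) - 1):
        if (counts[d - 1] - counts[d] > tolerance
                and counts[d + 1] - counts[d] > tolerance):
            return d
    return None  # no local minimum found
```

With tolerance 0 the function returns the first strict local minimum; a positive tolerance skips minor fluctuations, as the text suggests.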
In a step 103, an edge area in the image is then identified as a contiguous area of edge pixels. The expression "edge pixel" in this context is to be interpreted as a pixel for which the comparison parameter in a comparison with another pixel in step 101 was found to be greater than the threshold value d1. In other words, edge pixels can be said to be the pixels in the image that show the highest contrast in comparison with their respective local environments. The same image may include any number of such contiguous areas, but it is preferred that each contiguous and isolated area be defined as a single edge area, possibly using certain thresholds in the form of, for example, the minimum allowable size for an edge area or the minimum allowable size for connections between edge areas. Although it is preferable to perform the comparisons and determinations of threshold values as described above in each of certain predetermined directions in the image, it is also possible to perform the intensity property comparison between the pixel 4 and the surrounding pixels 3a-3h without taking directions into account, which in the case illustrated in Figure 1 would result in 8 different comparison values for each pixel in the image, where all these comparison values are analyzed in one and the same function n(d), so that only a single threshold value d1 will be determined.
In other words, if directions are used, a certain pixel 4 will be considered an edge pixel if the intensity difference between the pixel 4 and any of its surrounding pixels 3a-3h is greater than the threshold value for the direction defined by the surrounding pixel in question. In the case where directions are not used, on the other hand, the pixel 4 will be considered an edge pixel if the intensity property difference between the pixel 4 and one of the surrounding pixels 3a-3h is greater than the threshold value common to all directions.
According to a preferred embodiment, each edge pixel thus identified is associated with a binary number, in which each bit corresponds to a certain direction, and where the value (0 or 1) of the respective bit indicates whether the comparison parameter of the edge pixel in question in the direction corresponding to the bit was found to be greater than the threshold value for that direction. If, for example, the pixel 4 in Figure 1 was found to be an edge pixel because its difference in intensity in comparison with the surrounding pixels 3d, 3e and 3g was larger than the threshold values calculated for these three directions, the binary number can be calculated according to Table 2:

Table 2

Ambient pixel   Direction   Bit value
3a              270°        0
3b              315°        0
3c              0°          0
3d              45°         1
3e              225°        1
3f              180°        0
3g              135°        1
3h              90°         0

This gives the binary number 00011010, which corresponds to the decimal number 26. Such a representation, where a unique binary number identifies the directions along which the edge pixel 4 in question meets the criterion for being considered an edge pixel, allows efficient searching among identified edge pixels, because each edge pixel is associated with information that indicates in which directions adjacent pixels are at least likely also edge pixels.
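The binary direction encoding can be sketched as follows. The most-significant-bit-first ordering of the pixels 3a-3h is assumed here so as to reproduce the 00011010 → 26 example of Table 2; the patent does not fix a bit order.

```python
# Sketch of the per-edge-pixel direction bitmask. One bit per ambient
# pixel 3a..3h, assumed MSB-first to match the Table 2 example.
ORDER = ["3a", "3b", "3c", "3d", "3e", "3f", "3g", "3h"]

def encode_edge_directions(exceeds):
    """exceeds: set of ambient-pixel names (e.g. {"3d", "3e", "3g"})
    whose direction threshold was exceeded. Returns the binary number
    as an integer."""
    value = 0
    for name in ORDER:
        value = (value << 1) | (1 if name in exceeds else 0)
    return value

# The example from Table 2: thresholds exceeded toward 3d, 3e and 3g.
code = encode_edge_directions({"3d", "3e", "3g"})  # 0b00011010 == 26
```

Testing a single bit of such a mask is then a constant-time operation, which is what makes the traversal described further below fast.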
According to the invention, an identified edge area, consisting of a continuous amount of edge pixels which together form an edge area in the image, for example between two adjacent fields, is used to identify a characteristic area. It is preferred but not necessary to use a method such as the one described above to identify the edge area. By identifying as edge pixels such pixels that have a large intensity property difference in comparison with other pixels in a local environment, an edge area is created which effectively captures relevant parts of the image and is in many ways independent of the types of changes to the original image described above. By also performing the analysis in a number of predetermined directions, the precision is improved.
The flow chart in Figure 5 illustrates a method according to the present invention, where the identification of the edge area constitutes a first step 201.
Figure 4 shows in a matrix a certain edge area 10 identified in step 201 for a certain image. The image information itself is not displayed, for the sake of clarity, but only the extent of the edge area 10 in the pixel matrix of the image. The pixels belonging to the identified edge area 10 are marked with a gray background.
In accordance with the invention, in a step 202, a first direction A1 is identified in the image. The first direction A1 can be any direction, and may in particular be a direction in relation to the axial directions of the image or in relation to a certain geometric property of the identified edge area, such as its longest diameter. The direction A1 illustrated in Figure 4 is one of the axis directions of the image. In a step 203, the whole identified edge area 10 is then traversed along the first direction A1, in the exemplary embodiment illustrated in Figure 4 from below and upwards in the figure. For each position along the first direction A1, the width of the edge area 10 at the position in question is examined. The term "width" in this context means the extent along an additional direction that is not parallel to the first direction A1. This width direction is preferably perpendicular to the first direction A1, but can also be constituted, for example, by an axis direction in the image not parallel to A1.
By such successive examinations of the width along the first direction A1, a local edge area 11 is identified in the step 203. For each width examined, the pixels located along the width in question are added to the local edge area 11.
According to the invention, the local edge area 11 is identified so that the width of the local edge area 11 along the first direction A1 lacks local maxima except for a single global maximum. This can be achieved in practice by continuing the traversal to the next width along the first direction A1 as long as the width does not start to increase again after first having decreased. In other words, the identification of the local edge area 11 is interrupted when the first local minimum of the width along the first direction A1 has been reached, or alternatively when the last width in the edge area 10 has been reached.
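The stopping rule of step 203 can be sketched as follows, with the edge area represented simply by its sequence of widths along A1. This is an illustrative simplification; in practice the traversal works on the pixel matrix itself.

```python
def local_edge_extent(widths):
    """widths: width of the edge area at successive positions along A1.
    Returns how many positions belong to the local edge area: positions
    are added until the width starts to increase again after first having
    decreased (i.e. until the first local minimum), or until the edge
    area ends. The included widths thus have a single global maximum."""
    decreasing_seen = False
    for i in range(1, len(widths)):
        if widths[i] < widths[i - 1]:
            decreasing_seen = True
        elif widths[i] > widths[i - 1] and decreasing_seen:
            return i  # stop before the width that increases again
    return len(widths)  # last width of the edge area reached
```

For the width sequence 1, 3, 5, 4, 2, 3, 6, the first five widths are included: they rise to a single maximum of 5 and fall to the first local minimum of 2, after which the renewed increase interrupts the identification.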
According to a preferred embodiment, the previously stored binary numbers for each respective edge pixel are used to perform the traversal of the edge area 10. This is done so that the edge area 10 is traversed pixel by pixel, where the bit in the binary number corresponding to a certain direction determines whether the traversal is to continue in this direction or not, in such a way that a bit value of "0" means that the traversal does not continue in that direction. Such a method results in rapid traversal of all the pixels in an edge area 10.
The local edge area 11, thus identified, is illustrated in Figure 4 by means of broken lines.
Thereafter, additional local edge areas can be identified based on any parts of an identified edge area 10 which do not already form part of an identified local edge area 11. The procedure can also continue for all identified edge areas in the image, so that finally all identified edge areas have been divided into a number of local edge areas.
In a step 204, a second direction A2 is then identified for the local edge area 11. This second direction A2 can, in a manner similar to that of the first direction A1, be either related to the geometry of the image or the local edge area 11, for example to a longest diameter of the local edge area 11 or to an axis direction of the image.
In particular, the second direction A2 can be chosen as the width direction discussed above, or as the same direction as the first direction A1. In the exemplary embodiment illustrated in Figure 4, the second direction A2 is selected as the width direction discussed above.
After this, in a step 205, an additional area 12 is identified around the local edge area 11. The additional area 12 is illustrated in Figure 4 with a dashed background. According to the invention, in identifying the additional area 12, pixels 14a, 14b are selected which are located immediately adjacent to a peripherally located boundary pixel 13 of the local edge area 11 and further away from the boundary pixel 13, along the second direction A2, up to a certain distance in the second direction A2 from the boundary pixel 13.
The term "boundary pixel" herein means a pixel of the local edge area 11 which, in the second direction A2, is adjacent to a pixel 14b that is not included in the local edge area 11. For a certain such boundary pixel 13, the adjacent pixel 14b which is not included in the local edge area 11 is thus added to the additional area 12. In addition, the next pixel 14a in the second direction A2 is added to the additional area 12, and so on, up to the said distance from the boundary pixel 13. According to the invention, the distance is proportional to the width of the local edge area 11 in the second direction A2 at the boundary pixel 13. The proportionality constant between the distance of the additional area 12 and the width of the local edge area 11 may be any suitable constant, according to a preferred embodiment between 1 and 5, more preferably between 1 and 3, preferably 1. In the present exemplary embodiment the constant 1 is used, which means that the distance is selected to be the same as the width.
For the illustrated boundary pixel 13, the width at the boundary pixel 13 in the second direction A2 is two pixels, so two additional pixels 14a, 14b, which are not included in the local edge area 11, are selected to be included in the additional area 12 in the second direction A2. The size of the distance is thus variable over different boundary pixels. As is clear from Figure 4, the distance varies between 1 and 5 pixels for different boundary pixels.
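Step 205 can be sketched for a single row along A2 as follows. Representing the row as a boolean mask and assuming a single contiguous run of edge pixels is an illustrative simplification; the helper name is not from the patent.

```python
def extend_row(mask, constant=1):
    """mask: booleans along the second direction A2, True inside the local
    edge area for one row. Returns the indices of the additional-area
    pixels for that row: on each side of the run, as many pixels as
    constant * width (the proportional distance of step 205), clipped
    to the row boundaries. Assumes a single contiguous run of True."""
    inside = [i for i, m in enumerate(mask) if m]
    width = len(inside)                 # local width at the boundary pixels
    dist = constant * width             # distance, constant 1 => dist == width
    left, right = inside[0], inside[-1]
    extra = set(range(max(0, left - dist), left))
    extra |= set(range(right + 1, min(len(mask), right + 1 + dist)))
    return sorted(extra)
```

With the preferred constant 1, a run of width two pixels receives two additional pixels on each side, matching the example of the illustrated boundary pixel 13.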
According to a preferred embodiment, additional areas are identified in this way for all boundary pixels in the local edge area 11, and all additional areas thus identified together form the additional area for the local edge area 11.
This results in high repeatability in the identification of a characteristic part.
According to the invention, the characteristic part of the image is then identified so as to include the local edge area and the additional area. In other words, the characteristic part will constitute a coherent area of the image, comprising partly edge pixels and partly additional pixels. The characteristic part is then stored in digital form on a storage medium, which can be any conventional storage medium as described above.
Thus, the method of the present invention is based on an edge area in an image. Such edge areas can generally be automatically identified for different images in a highly repeatable manner. Since the interesting information in photographic images in many cases tends to be associated with parts with relatively high contrast and/or parts that constitute boundaries between fields present in the image, for example between foreground and background in the image, the edge areas identified by an automatic algorithm will generally be located in portions of the image which are interesting from, for example, a comparison point of view. Particularly good repeatability has been achieved by the present inventors by means of the method described above for identifying an edge area. By selecting a local edge area with only a single global maximum, spatially limited, distinct edge areas of appropriate size can be obtained quickly and in a repeatable manner.
By identifying a characteristic part as the union of an edge area and an additional area in the manner indicated above, the characteristic part will also include portions of the image surrounding an edge area. It has surprisingly been found that such surroundings often contain highly relevant image information. By selecting the additional area so that its dimensions are proportional to those of the edge area, a characteristic part is obtained whose extent covers approximately the same portion of the image even if the image has been modified with respect to resolution, angular changes, focusing, rotation, color settings, etc. In other words, the identification will be highly repeatable. Since the edge area is identified solely on the basis of the image content itself, and since the additional area is identified on the basis of the edge area, there are no dependencies on, for example, the external geometry of the image matrix.
Finally, this means that characteristic parts identified by a method according to the present invention in all probability give repeatable results when used for a wide spectrum of different purposes, such as automatic comparisons between images.
It is preferred that the digitally stored characteristic part be brought to include information regarding which pixels constitute edge pixels, and also the original pixel information from the image. Storing a characteristic part in this way allows the part to be easily used in analyses that also take into account which pixels constitute edge pixels. In many applications, this can be an interesting parameter in, for example, comparisons between images, since these pixels can, for example, mark boundaries between fields in the image.
According to a preferred embodiment, not only one first direction is selected, but two first directions. These directions may, in the same manner as described above for the first direction, be any directions related to the dimensions of the image or to the dimensions of the identified edge area, as long as the two first directions are not parallel. The identification of the local edge area is then performed in a corresponding manner as described above, but using both of the first directions separately. Thus, the local edge area is selected as the part of the identified edge area that has only a single global maximum along each of the respective first directions. This results in higher repeatability when identifying characteristic parts, even in the case of the image being rotated and also in the case of elongated edge areas. It is preferred that the two first directions be perpendicular. Examples include the direction along the longest diameter of the edge area and its normal direction, respectively, as well as two perpendicular axis directions of the image. The latter in particular enables fast calculations in an automated procedure in combination with high consistency and repeatability.
In a similar way, according to a preferred embodiment, not only one second direction but two second directions are selected. These two second directions can also, as described above for the second direction, be any directions that are related to the dimensions of the image or to the dimensions of the identified edge area. However, they must not be parallel. Thereafter, the identification of the additional area is performed in a corresponding manner as described above for the second direction, but using both the second directions.
Thus, an additional area is first identified by means of one of the second directions. Then, in a corresponding manner, a further additional area is identified by means of the other of the two second directions. Finally, the additional area for the characteristic part is selected as the union of these identified additional areas. Such a method gives further improved consistency and repeatability, since the additional area will not change its appearance appreciably even if the image has, for example, been rotated in relation to its original appearance.
If further improved repeatability is desired, it is further possible to use further first and second directions, respectively.
According to a preferred embodiment, the image is a stand-alone photographic image or a photographic image included in a film sequence. According to another preferred embodiment, the image is a microscopic image or an image which is the result of a measurement of the internal structure of an object, such as an X-ray image of a patient or of an article to be inspected for material defects, or alternatively a cross-section of the three-dimensional structure of an object, such as an NMR image of tissue in a patient.
Preferred embodiments have been described above. However, it will be apparent to those skilled in the art that many changes may be made to the described embodiments without departing from the spirit of the invention. Thus, the invention should not be limited to the described embodiments, but may be varied within the scope of the appended claims.
Claims:
Claims (9)
[1]
A method for identifying a characteristic part of an image (1), wherein the image (1) is caused to be stored in digital form as a matrix made up of pixels, characterized by the following steps: a) identifying an edge area (10) in the image (1) consisting of a continuous amount of edge pixels which together form an edge area (10) in the image; b) identifying a first direction (A1) in the image (1) for the edge area (10); c) identifying a local edge area (11) consisting of a contiguous amount of pixels, for which local edge area (11) the width along the first direction (A1) lacks local maxima except for a single global maximum; d) identifying a second direction (A2) for the local edge area (11); e) identifying an additional area (12) around the local edge area (11) which comprises pixels (14a, 14b) located immediately adjacent to a peripherally located boundary pixel (13) of the local edge area (11) and further away from the boundary pixel (13), up to a distance in the second direction (A2) from the boundary pixel (13) that is proportional to the width of the local edge area (11) in the second direction (A2) at the boundary pixel (13); f) identifying the characteristic part of the image (1) so as to include the local edge area (11) and the additional area (12), and storing the characteristic part in digital form on a storage medium.
[2]
A method according to claim 1, characterized in that in step f), the digital storage of the characteristic part is made to include the original pixel information from the image and information regarding which pixels in the characteristic part constitute edge pixels.
[3]
A method according to claim 1 or 2, characterized in that in step a) the edge area (10) is made to be identified by a method comprising the steps of: i. for all pairs of pixels in the image where one pixel (3a-3h) is located in a predetermined local environment (3) of the other pixel (4), determining the value of a comparison parameter which constitutes the difference between the two pixels regarding an intensity property over one or more channels; ii. determining a threshold value (d1) for the comparison parameter, where the threshold value (d1) is the first local minimum with respect to the comparison parameter that occurs in a function (f) which for each value of the comparison parameter indicates the number of compared pixel pairs in the image that in step i) showed this value for the comparison parameter, where the function (f) may first be filtered to reduce the effect of noise in determining the first local minimum (d1); iii. identifying the edge area (10) as a continuous area of edge pixels, in which all edge pixels consist of pixels for which the comparison parameter when compared with another pixel in step i) was found to be greater than the threshold value (d1).
[4]
A method according to claim 3, characterized in that steps i) and ii) are performed for each of a set of angularly evenly distributed, predetermined directions in the image, so that for each such predetermined direction, first in step i) each pixel in the image is compared with an adjacent pixel along the direction, and then in step ii) a threshold value is determined for the pixel pairs compared in step i), so that a respective threshold value is obtained for each predetermined direction, and in that the edge area (10) in step iii) is identified as a coherent area of edge pixels, where all edge pixels are pixels for which the comparison parameter when compared to another pixel in step i) was found to be greater than the threshold value of the direction of the other pixel, for at least one of the predetermined directions.
[5]
A method according to claim 4, characterized in that each edge pixel is associated with a binary represented number, in which each bit corresponds to a certain direction, and where the value (0 or 1) of the respective bit refers to whether the comparison parameter of the edge pixel in question in the direction corresponding to the bit was found to be greater than the threshold value for that direction, so that in this way a binary number for each edge pixel is caused to store information as to along which directions the edge pixel in question meets the criterion of being an edge pixel.
[6]
Method according to one of Claims 3 to 5, characterized in that the intensity property for a pixel is caused to consist of the total luminance of the pixel in question.
[7]
A method according to claim 6, characterized in that the image (1), in a first step preceding step a), is caused to be converted to a grayscale image, in which the total luminance of a pixel is constituted by the grayscale value of the pixel in question.
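A minimal sketch of the grayscale conversion described here. The claim only requires that each pixel's value represent its total luminance, so the ITU-R BT.601 channel weights below are an assumption.

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an RGB image to grayscale so that each pixel's value is its
    total luminance. The BT.601 weights are an assumed luminance measure."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)
```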
[8]
Method according to one of the preceding claims, characterized in that step e) is performed for all boundary pixels in the local edge area (11) which, in the second direction (A2), border on a pixel in the image (1) which is not included in the edge area (10), and in that the characteristic part of the image (1) is caused to comprise all the supplemental areas identified for the local edge area (11).
[9]
Method according to one of the preceding claims, characterized in that in step b) two different first directions are caused to be selected, and in that in step c) the local edge area is caused to be identified so that it lacks local maxima in addition to a respective single global maximum along each of the two first directions separately.
[10]
A method according to claim 9, characterized in that the two first directions are made up of two mutually perpendicular coordinate axes in the image matrix.
[11]
A method according to any one of the preceding claims, characterized in that in step d) two different second directions are caused to be selected, and in that in step e) the supplemental area (12) is caused to be identified so that it comprises pixels which are located immediately adjacent to a boundary pixel of the local edge area (11) along each respective second direction, and pixels further away from the boundary pixel up to a distance, in the respective second direction from the boundary pixel, which is proportional to the width of the local edge area (11) in the respective second direction at the boundary pixel.
[12]
Method according to claim 11, characterized in that the two second directions are made to consist of two mutually perpendicular coordinate axes in the image matrix.
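The single-global-maximum criterion of step c) and claim 9 can be illustrated by a check on the width profile of a candidate local edge area. The strict-inequality peak test below (which treats flat plateaus as no peak) is an assumed simplification.

```python
def has_single_global_maximum(widths):
    """Count strict local maxima in a width profile; the local edge area
    qualifies only if the profile has exactly one (its global maximum)."""
    peaks = 0
    for i, w in enumerate(widths):
        left = widths[i - 1] if i > 0 else -1
        right = widths[i + 1] if i < len(widths) - 1 else -1
        if w > left and w > right:
            peaks += 1
    return peaks == 1
```

A profile such as [1, 2, 3, 2, 1] passes, while a twin-peaked profile such as [1, 3, 1, 3, 1] does not.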
Similar technologies:
Publication number | Publication date | Patent title
Gong et al.2014|Interactive Shadow Removal and Ground Truth for Variable Scene Categories.
US8503767B2|2013-08-06|Textual attribute-based image categorization and search
JP2006053919A|2006-02-23|Image data separating system and method
Soille2006|Morphological image compositing
US9177222B2|2015-11-03|Edge measurement video tool and interface including automatic parameter set alternatives
AU2014262134B2|2019-09-26|Image clustering for estimation of illumination spectra
CN103808263B|2016-05-11|The high-flux detection method of Grain rice shape parameter
CN105556568A|2016-05-04|Geodesic saliency using background priors
Gong et al.2016|Interactive removal and ground truth for difficult shadow scenes
WO2014156425A1|2014-10-02|Method for partitioning area, and inspection device
US11189022B2|2021-11-30|Automatic detection, counting, and measurement of logs using a handheld device
SE534089C2|2011-04-26|Procedure for classifying image information
SE1150522A1|2012-12-09|Procedure for identifying a characteristic part of an image
Šaškov et al.2015|Comparison of manual and semi-automatic underwater imagery analyses for monitoring of benthic hard-bottom organisms at offshore renewable energy installations
CN109035170A|2018-12-18|Adaptive wide-angle image correction method and device based on single grid chart subsection compression
US20140160155A1|2014-06-12|Inserting an Object into an Image
JP6100602B2|2017-03-22|Long object number measuring device, long object number measuring method, and computer program
Abuhasel et al.2015|A commixed modified Gram-Schmidt and region growing mechanism for white blood cell image segmentation
SE1150523A2|2013-10-01|
Stamm et al.2017|In silico methods for cell annotation, quantification of gene expression, and cell geometry at single-cell resolution using 3DCellAtlas
Pramulyo et al.2017|Towards better 3D model accuracy with spherical photogrammetry
CN104537633B|2017-07-11|A kind of method that utilization image fusion technology eliminates the anti-shadow of image
Tran et al.2018|Determination of Injury Rate on Fish Surface Based on Fuzzy C-means Clustering Algorithm and L* a* b* Color Space Using ZED Stereo Camera
US20210158530A1|2021-05-27|Computer-implemented method for segmenting measurement data from a measurement of an object
SE1050937A1|2012-03-11|Procedure for automatically classifying a two- or high-dimensional image
Patent family:
Publication number | Publication date
SE535888C2|2013-02-05|
WO2012169964A1|2012-12-13|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

US6639593B1|1998-07-31|2003-10-28|Adobe Systems, Incorporated|Converting bitmap objects to polygons|
US7333648B2|1999-11-19|2008-02-19|General Electric Company|Feature quantification from multidimensional image data|
JP2007140684A|2005-11-15|2007-06-07|Toshiba Corp|Image processor, method and program|
JP4966077B2|2007-04-12|2012-07-04|キヤノン株式会社|Image processing apparatus and control method thereof|
JP5229744B2|2007-12-03|2013-07-03|国立大学法人北海道大学|Image classification device and image classification program|
JP5376906B2|2008-11-11|2013-12-25|パナソニック株式会社|Feature amount extraction device, object identification device, and feature amount extraction method|
CN104240228B|2013-06-24|2017-10-24|阿里巴巴集团控股有限公司|A method and device for detecting a particular picture applied to a website|
Legal status:
Priority:
Application number | Filing date | Patent title
SE1150522A|2011-06-08|Procedure for identifying a characteristic part of an image|SE535888C2|
PCT/SE2012/050620| WO2012169964A1|2011-06-08|2012-06-08|Method for identifying a characteristic part of an image|